The Alternating Descent Conditional Gradient Method for Sparse Inverse Problems
Authors
Abstract
Similar Resources
Alternating proximal gradient method for sparse nonnegative Tucker decomposition
Multi-way data arises in many applications such as electroencephalography classification, face recognition, text mining and hyperspectral data analysis. Tensor decomposition has been commonly used to find the hidden factors and elicit the intrinsic structures of the multi-way data. This paper considers sparse nonnegative Tucker decomposition (NTD), which is to decompose a given tensor into the p...
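As a rough illustration of the alternating proximal gradient idea, the sketch below performs one alternating update of a single factor in a nonnegative, l1-regularized least-squares model; it uses the fact that the prox of t*||X||_1 plus a nonnegativity constraint is a shifted ReLU. The names A, B, X, lam, and the Lipschitz step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def prox_nonneg_l1(X, t):
    """Proximal operator of t*||X||_1 restricted to X >= 0 (hypothetical helper)."""
    return np.maximum(0.0, X - t)

def factor_update(A, B, X, lam, n_iters=50):
    """Proximal gradient updates of factor A in
    min ||X - A @ B||_F^2 / 2 + lam * ||A||_1  subject to  A >= 0."""
    L = np.linalg.norm(B @ B.T, 2)   # Lipschitz constant of the smooth gradient
    step = 1.0 / L
    for _ in range(n_iters):
        grad = (A @ B - X) @ B.T     # gradient of the least-squares term in A
        A = prox_nonneg_l1(A - step * grad, step * lam)
    return A
```

In an alternating scheme of this kind, each factor of the decomposition would receive such an update in turn while the others are held fixed.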
A Sparse Approximate Inverse Preconditioner for the Conjugate Gradient Method
A method for computing a sparse incomplete factorization of the inverse of a symmetric positive definite matrix A is developed, and the resulting factorized sparse approximate inverse is used as an explicit preconditioner for conjugate gradient calculations. It is proved that in exact arithmetic the preconditioner is well defined if A is an H-matrix. The results of numerical experiments are pre...
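For context, here is a minimal sketch of preconditioned conjugate gradients in which the preconditioner is applied explicitly, as a factorized sparse approximate inverse of A would be. The callable `apply_M` is a placeholder for that product; none of the names come from the paper, and the stopping rule is a simplification.

```python
import numpy as np

def pcg(A, b, apply_M, tol=1e-8, max_iters=500):
    """Conjugate gradients for A x = b with an explicit preconditioner apply_M(r) ~ inv(A) r."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_M(r)                  # explicit preconditioner application
    p = z.copy()
    rz = r @ z
    for _ in range(max_iters):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_M(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p   # preconditioned search direction update
        rz = rz_new
    return x
```

With a factorized approximate inverse G @ G.T ≈ inv(A), `apply_M` reduces to two sparse matrix-vector products, which is what makes the preconditioner cheap to apply in parallel.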
The Cyclic Block Conditional Gradient Method for Convex Optimization Problems
In this paper we study the convex problem of optimizing the sum of a smooth function and a compactly supported non-smooth term with a specific separable form. We analyze the block version of the generalized conditional gradient method when the blocks are chosen in a cyclic order. A global sublinear rate of convergence is established for two different stepsize strategies commonly used in this cl...
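A minimal sketch of the cyclic block conditional gradient idea, under the illustrative assumption that the feasible set is a product of unit simplices (one per block) and that the standard diminishing step size 2/(k+2) is used; neither choice is taken from the paper.

```python
import numpy as np

def cyclic_block_cg(grad_f, blocks, x, n_epochs=100):
    """blocks: list of index arrays partitioning x; x must start feasible,
    i.e. each x[idx] on its unit simplex."""
    k = 0
    for _ in range(n_epochs):
        for idx in blocks:            # visit the blocks in a fixed cyclic order
            g = grad_f(x)[idx]
            s = np.zeros(len(idx))    # linear minimizer over the simplex is a vertex:
            s[np.argmin(g)] = 1.0     # the coordinate with the smallest gradient entry
            gamma = 2.0 / (k + 2.0)   # standard conditional-gradient step size
            x[idx] = (1 - gamma) * x[idx] + gamma * s
            k += 1
    return x
```

Each block step only requires a linear minimization over that block's feasible set, which is the feature that makes conditional gradient methods projection-free.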
Alternating Direction Method of Multipliers for Linear Inverse Problems
In this paper we propose an iterative method using alternating direction method of multipliers (ADMM) strategy to solve linear inverse problems in Hilbert spaces with a general convex penalty term. When the data is given exactly, we give a convergence analysis of our ADMM algorithm without assuming the existence of a Lagrange multiplier. In case the data contains noise, we show that our method ...
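As a concrete finite-dimensional instance, the sketch below applies scaled-dual ADMM to min ||K x - y||^2 / 2 + lam ||x||_1 with the splitting x = z. This is the textbook form, not the paper's Hilbert-space analysis; K, y, lam, and rho are illustrative names.

```python
import numpy as np

def admm_l1(K, y, lam, rho=1.0, n_iters=200):
    n = K.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    Q = K.T @ K + rho * np.eye(n)    # system matrix for the repeated x-update
    Kty = K.T @ y
    for _ in range(n_iters):
        x = np.linalg.solve(Q, Kty + rho * (z - u))                       # quadratic subproblem
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)   # soft thresholding
        u += x - z                                                        # scaled dual update
    return z
```

The appeal of the splitting is that each subproblem is easy: a linear solve for x and a closed-form prox for z.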
Sparse Communication for Distributed Gradient Descent
We make distributed stochastic gradient descent faster by exchanging sparse updates instead of dense updates. Gradient updates are positively skewed as most updates are near zero, so we map the 99% smallest updates (by absolute value) to zero then exchange sparse matrices. This method can be combined with quantization to further improve the compression. We explore different configurations and a...
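A minimal sketch of the drop-the-smallest-99% idea: keep only the top 1% of gradient entries by magnitude and exchange them as a sparse (index, value) update. Returning the dropped mass as a residual to fold into the next gradient is a common companion technique and an assumption here, not a claim about the paper.

```python
import numpy as np

def sparsify(grad, keep_fraction=0.01):
    """Keep the top `keep_fraction` of entries of `grad` by absolute value."""
    k = max(1, int(keep_fraction * grad.size))
    idx = np.argpartition(np.abs(grad), -k)[-k:]   # indices of the k largest |entries|
    values = grad[idx]
    residual = grad.copy()
    residual[idx] = 0.0    # dropped updates, to be accumulated into the next step
    return idx, values, residual
```

Only `idx` and `values` need to be communicated, which is where the bandwidth saving comes from.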
Journal
Journal title: SIAM Journal on Optimization
Year: 2017
ISSN: 1052-6234, 1095-7189
DOI: 10.1137/15m1035793